Rapid Mixing in Markov Chains

Author

  • R. Kannan
Abstract

A wide class of "counting" problems has been studied in computer science. Three typical examples are the estimation of (i) the permanent of an n × n 0-1 matrix, (ii) the partition function of certain n-particle statistical mechanics systems, and (iii) the volume of an n-dimensional convex set. These problems can be reduced to sampling from the steady-state distribution of implicitly defined Markov chains with an exponential (in n) number of states. The focus of this talk is the proof that such Markov chains converge to the steady state quickly, in time polynomial in n. A combinatorial quantity called conductance is used for this purpose; there are other techniques as well, which we briefly outline. We then illustrate the method on the three examples and briefly mention further examples. 2000 Mathematics Subject Classification: 68W20, 60G50.
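As a small illustration of the conductance quantity mentioned in the abstract (not code from the talk itself), the sketch below brute-forces the conductance Φ = min over sets S with π(S) ≤ 1/2 of Q(S, S̄)/π(S), where Q(x, y) = π(x)P(x, y). The function name and the example chain, a lazy random walk on a 6-cycle, are our own choices for illustration; the exhaustive search is only feasible for tiny state spaces, whereas the point of the conductance technique is to bound Φ analytically for the exponentially large chains discussed above.

```python
import itertools

def conductance(P, pi):
    # Brute-force conductance: Phi = min over S with pi(S) <= 1/2 of
    # Q(S, S^c) / pi(S), where Q(x, y) = pi(x) * P[x][y].
    # Feasible only for tiny chains; shown here purely to make the
    # definition concrete.
    n = len(pi)
    states = range(n)
    best = float("inf")
    for r in range(1, n):
        for S in itertools.combinations(states, r):
            mass = sum(pi[x] for x in S)
            if mass <= 0.5:  # only sets carrying at most half the mass
                flow = sum(pi[x] * P[x][y]
                           for x in S for y in states if y not in S)
                best = min(best, flow / mass)
    return best

# Lazy random walk on a 6-cycle: hold with probability 1/2,
# otherwise step to a uniformly chosen neighbour.
n = 6
P = [[0.0] * n for _ in range(n)]
for x in range(n):
    P[x][x] = 0.5
    P[x][(x + 1) % n] = 0.25
    P[x][(x - 1) % n] = 0.25
pi = [1.0 / n] * n  # uniform stationary distribution

phi = conductance(P, pi)  # worst cut is an arc of 3 states: phi = 1/6
```

The worst cut here is an arc of three consecutive states, across which only two edges carry probability flow; chains that mix rapidly are exactly those without such "bottleneck" cuts.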


Similar articles

On Systematic Scan

In this thesis we study the mixing time of systematic scan Markov chains on finite spin systems. A systematic scan Markov chain updates the sites in a deterministic order, and this type of chain is often seen by scientists conducting experimental work as intuitively appealing to implement. Until recently, systematic scan Markov chains have largely resi...
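To make the systematic-scan idea concrete, here is a minimal sketch (our own, not from the thesis) of one scan sweep of heat-bath updates for an Ising model on a cycle: sites are visited in a fixed order, and each spin is resampled from its conditional distribution given its neighbours. The function name, graph, and inverse temperature are illustrative assumptions.

```python
import math
import random

def systematic_scan_sweep(spins, beta, neighbors):
    # One systematic-scan sweep of heat-bath (Gibbs) updates for the
    # Ising model: visit sites in the fixed order 0, 1, ..., n-1 and
    # resample each spin from its conditional law given its neighbours.
    for i in range(len(spins)):
        field = sum(spins[j] for j in neighbors[i])
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))  # P(spin_i = +1 | rest)
        spins[i] = 1 if random.random() < p_up else -1
    return spins

# Ising model on a cycle of 8 sites; each site has two neighbours.
n = 8
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
random.seed(0)
spins = [random.choice([-1, 1]) for _ in range(n)]
for _ in range(100):  # run 100 full sweeps
    spins = systematic_scan_sweep(spins, beta=0.3, neighbors=neighbors)
```

The contrast with a random-update (Glauber) chain is only the site-selection rule; the per-site conditional distribution is identical, which is why scan order is the natural object of study when bounding mixing times for these chains.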


Rapidly Mixing Markov Chains: A Comparison of Techniques

For many fundamental sampling problems, the best, and often the only known, approach to solving them is to take a long enough random walk on a certain Markov chain and then return the current state of the chain. Techniques to prove how long “long enough” is, i.e., the number of steps in the chain one needs to take in order to be sufficiently close to the stationary distribution of the chain, ar...


Rapidly Mixing Markov Chains: A Comparison of Techniques (A Survey)

For many fundamental sampling problems, the best, and often the only known, approach to solving them is to take a long enough random walk on a certain Markov chain and then return the current state of the chain. Techniques to prove how long “long enough” is, i.e., the number of steps in the chain one needs to take in order to be sufficiently close to the stationary distribution of the chain, ar...


Markov Chains on Orbits of Permutation Groups

We present a novel approach to detecting and utilizing symmetries in probabilistic graphical models with two main contributions. First, we present a scalable approach to computing generating sets of permutation groups representing the symmetries of graphical models. Second, we introduce orbital Markov chains, a novel family of Markov chains leveraging model symmetries to reduce mixing times. We...


Concentration of Measure and Mixing for Markov Chains

We consider Markovian models on graphs with local dynamics. We show that, under suitable conditions, such Markov chains exhibit both rapid convergence to equilibrium and strong concentration of measure in the stationary distribution. We illustrate our results with applications to some known chains from computer science and statistical mechanics.


Markov chains for sampling matchings

Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space, and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the sta...
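A minimal sketch of such a chain for matchings (our own simplified version; practical samplers such as the Jerrum-Sinclair chain also use "slide" moves for faster mixing): pick an edge uniformly at random, remove it if it is in the current matching, add it if both its endpoints are uncovered, and otherwise do nothing. Because every move and its reverse occur with the same probability, the stationary distribution is uniform over all matchings of the graph.

```python
import random

def matching_chain_step(M, edges, rng):
    # One step of a simple add/remove chain on matchings of a graph:
    # pick an edge uniformly at random; remove it if it is already in
    # the matching, add it if both endpoints are currently uncovered,
    # otherwise stay put.  Each move is symmetric, so the stationary
    # distribution is uniform over all matchings.
    e = rng.choice(edges)
    if e in M:
        M.remove(e)
    else:
        u, v = e
        covered = {x for (a, b) in M for x in (a, b)}
        if u not in covered and v not in covered:
            M.add(e)
    return M

# Approximately sample a uniform matching of a 4-cycle by running the chain.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
rng = random.Random(1)
M = set()
for _ in range(1000):
    M = matching_chain_step(M, edges, rng)
```

The chain's state after many steps is close to uniform over matchings; how many steps are "sufficiently large", in the sense of the blurb above, is exactly the mixing-time question.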




Publication date: 2010